Make shielded syncing a separate command #2422
Conversation
Thanks for these changes; I think I understand the general flow. But maybe I am missing some of the higher-level considerations that motivated this approach of moving MASP synchronization into a separate command. Some questions:
- Have alternatives to interrupts been considered? Like continuously saving/appending new transactions to a file?
- Given interrupt usage and SyncStatus modality, will the MASP functionality be easily usable from the SDK on all platforms (including the web)?
- How do the `ShieldedContext` lock guard in the SDK and the `SyncStatus` marker type interact? Is there any overlap or redundancy here?
- How do we handle shielded context file consistency when multiple clients (from separate processes) are running?
@@ -1471,6 +1463,22 @@ pub struct IndexedTx {
    pub index: TxIndex,
}

impl PartialOrd for IndexedTx {
What is the difference between this ordering of `IndexedTx` and the one from `#[derive(Ord, PartialOrd)]`? Isn't the latter also lexicographic?
I didn't trust the derived `Ord` (especially if this struct changes in the future), and I wanted to be very explicit for other devs since this ordering is important.
let's leave a note about it (or a test)
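Something along these lines could serve as that note/test (a sketch only; the `height`/`index` field names and the tuple-struct constructors for `BlockHeight` and `TxIndex` are assumptions here, not necessarily the real definitions):

```rust
// Sketch of a regression test documenting that IndexedTx is ordered by block
// height first and then by the tx's index within the block, which the
// scanning logic relies on.
#[test]
fn indexed_tx_is_ordered_by_height_then_index() {
    // Hypothetical field names and constructors.
    let earlier = IndexedTx { height: BlockHeight(1), index: TxIndex(5) };
    let later = IndexedTx { height: BlockHeight(2), index: TxIndex(0) };
    // A tx in a later block sorts after one in an earlier block, even if its
    // intra-block index is smaller.
    assert!(earlier < later);

    // Within the same block, the tx index breaks the tie.
    let first = IndexedTx { height: BlockHeight(3), index: TxIndex(0) };
    let second = IndexedTx { height: BlockHeight(3), index: TxIndex(1) };
    assert!(first < second);
}
```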
@@ -187,7 +187,7 @@ fn run_ledger_ibc() -> Result<()> {
}

#[test]
-fn run_ledger_ibc_with_hermes() -> Result<()> {
+fn drun_ledger_ibc_with_hermes() -> Result<()> {
Small typo here
// Describe how a Transfer simply subtracts from one
// account and adds the same to another
tx_ctx.scan_tx(*indexed_tx, *epoch, tx, stx)?;
self.unscanned.txs.remove(indexed_tx);
This process of removing transactions, starting from the oldest and going forward in time, could be interrupted, right? If this were to happen, then the very first accepted transactions would not be in `self.unscanned.txs` from the point that `break` is called and afterwards, right? My question is: if the client were then to add more new, as-yet-unknown keys to their wallet, would `fetch` start the scan from the very beginning? Or would it start the rescan at the transaction that was interrupted (despite the keys being new/unknown up till this point)?
This is part of the bug I mentioned above, and I have fixed it. If we have new keys, we need to scan from 0 to `self.last_fetched`. If some of those blocks are already in the local cache, we can skip fetching them.
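To sketch what I mean (illustrative types and field names only, not the actual SDK API):

```rust
use std::collections::BTreeMap;

// Hypothetical, simplified stand-ins for the real shielded-sync state, just
// to illustrate the control flow described above.
struct UnscannedCache {
    /// Fetched-but-not-yet-scanned transactions, keyed by block height.
    txs: BTreeMap<u64, Vec<u8>>,
}

struct ShieldedSync {
    unscanned: UnscannedCache,
    /// Highest block height that has ever been fetched.
    last_fetched: u64,
    /// True when viewing keys were added that have never been scanned for.
    has_new_keys: bool,
}

impl ShieldedSync {
    /// Decide which heights still need to be fetched: with new keys the
    /// range restarts at 0, and heights already in the local cache are
    /// skipped rather than fetched again.
    fn heights_to_fetch(&self, latest_unscanned: u64) -> Vec<u64> {
        let start = if self.has_new_keys { 0 } else { latest_unscanned };
        (start..=self.last_fetched)
            .filter(|h| !self.unscanned.txs.contains_key(h))
            .collect()
    }
}
```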
std::mem::swap(&mut self.unscanned.txs, &mut txs);
let txs = ProgressLogging::new(txs, io, ProgressType::Scan);
for (indexed_tx, (epoch, tx, stx)) in txs {
    if self.interrupted() {
If an interrupt happens here, would `self.unscanned.txs` be left in a state where it's missing the very first transactions? If this were the case, would this hamper the client's future ability to scan the very first transactions with viewing keys that are not yet known at this point?
Yes, the cache is cleared out. If new keys appear later, we have to repopulate this cache.
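Roughly, the interaction looks like this (simplified, hypothetical shape of the cache): entries are dropped from the unscanned cache as they are scanned oldest-first, so an interrupt leaves only the not-yet-scanned tail behind.

```rust
use std::collections::BTreeMap;

// Simplified sketch: scan cached transactions oldest-first, removing each one
// from the cache as it is scanned. An interrupt leaves the cache without its
// oldest entries, so they must be re-fetched if new viewing keys appear.
fn scan_until_interrupted(
    unscanned: &mut BTreeMap<u64, Vec<u8>>,
    mut interrupted: impl FnMut() -> bool,
) {
    let heights: Vec<u64> = unscanned.keys().copied().collect();
    for height in heights {
        if interrupted() {
            // Everything before this point has already been removed from the
            // cache; only the heights after it remain.
            break;
        }
        // "Scan" the tx with the currently known viewing keys, then drop it.
        let _tx = unscanned.remove(&height);
    }
}
```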
    last_query_height,
)
.await?;
self.unscanned.txs.extend(fetched);
Before this line, is `self.unscanned.txs` supposed to contain all transactions up till `self.latest_unscanned()`? After this line, is `self.unscanned.txs` supposed to contain all transactions up till `last_query_height`?
`self.latest_unscanned()` is supposed to tell us the most recent block height in the local cache. However, there is a bug here: we need to fetch from zero up to `self.last_fetched`, since we are trying to sync up new keys with existing keys.
I was thinking that there could be alternatives to interrupts other than daemon processes. For instance, we could do things as before but atomically commit the
I see, this is a valid approach. That being said, it may be undesirable to do a sync just before displaying balances in our client. I wouldn't say that it is necessarily illogical (to the point of being a type error) for users of the SDK to interleave syncing and non-syncing operations in their own applications.
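For what it's worth, one way the "atomically commit" idea could look is the usual temp-file-plus-rename pattern (a minimal sketch; the file names and the serialization step are assumptions, not the actual implementation):

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write the updated shielded context to a temporary file and rename it over
// the old one, so an interrupt or crash never leaves a half-written file.
fn commit_atomically(dir: &Path, serialized_ctx: &[u8]) -> std::io::Result<()> {
    let tmp = dir.join("shielded.tmp"); // hypothetical file names
    let dst = dir.join("shielded.dat");
    let mut f = fs::File::create(&tmp)?;
    f.write_all(serialized_ctx)?;
    f.sync_all()?;
    // Renaming within the same directory is atomic on most platforms, so
    // readers see either the old or the new file, never a partial write.
    fs::rename(&tmp, &dst)?;
    Ok(())
}
```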
When working with you on the note scanning algorithm, I noticed two things:
In light of the above, it may be worth reconsidering if and how we do If the
Closing since this has already been done.
Describe your changes
Previously, the scanning of MASP notes was done as part of various MASP commands. This moves that logic into a separate command so that it can be done out of band.
Indicate on which release or other PRs this topic is based on
#2363
Checklist before merging to draft